Evaluating new techniques on realistic datasets plays a crucial role in the development of ML research and its broader adoption by practitioners. In recent years, there has been a significant increase in publicly available unstructured data resources for computer vision and NLP tasks. However, tabular data -- which is prevalent in many high-stakes domains -- has been lagging behind. To bridge this gap, we present Bank Account Fraud (BAF), the first publicly available, privacy-preserving, large-scale, realistic suite of tabular datasets. The suite was generated by applying state-of-the-art tabular data generation techniques to an anonymized, real-world bank account opening fraud detection dataset. This setting carries a set of challenges that are commonplace in real-world applications, including temporal dynamics and significant class imbalance. Additionally, to allow practitioners to stress test both the performance and fairness of ML methods, each dataset variant of BAF contains specific types of data bias. With this resource, we aim to provide the research community with a more realistic, complete, and robust test bed to evaluate novel and existing methods.
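For concreteness, here is a minimal sketch of how a practitioner might load one BAF variant and inspect its class imbalance and temporal split; the file name ("Base.csv") and column names ("fraud_bool", "month") are assumptions about the released CSVs and may differ from the actual suite.

```python
# Minimal sketch: inspect one BAF variant for class imbalance and temporal
# structure. File and column names are assumptions, not guaranteed to match.
import pandas as pd

df = pd.read_csv("Base.csv")  # one of the BAF dataset variants

# Class imbalance: fraud is expected to be a small fraction of applications.
print("Fraud rate:", df["fraud_bool"].mean())

# Temporal dynamics: a realistic evaluation trains on earlier months and
# tests on later ones rather than shuffling rows at random.
train = df[df["month"] < 6]
test = df[df["month"] >= 6]
print(len(train), len(test))
```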
Machine Learning (ML) algorithms based on Gradient Boosted Decision Trees (GBDT) remain the method of choice for many tabular data tasks in mission-critical applications ranging from healthcare to finance. However, GBDT algorithms are not free from the risk of bias and discriminatory decision-making. Despite GBDT's popularity and the rapid pace of fair ML research, existing in-processing fair ML methods are either inapplicable to GBDT, incur a large overhead in training time, or are inadequate for problems with high class imbalance. We present FairGBM, a learning framework for training GBDT under fairness constraints with little to no impact on predictive performance when compared to unconstrained LightGBM. Since common fairness metrics are non-differentiable, we adopt a "proxy-Lagrangian" formulation with smooth convex error-rate proxies to enable gradient-based optimization. Moreover, compared with related work, our open-source implementation shows an order-of-magnitude speedup in training time, a pivotal aspect to foster the widespread adoption of FairGBM by real-world practitioners.
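The core idea, descending on model parameters while ascending on constraint multipliers attached to smooth error-rate proxies, can be illustrated with a toy sketch. The example below uses a plain logistic model and a sigmoid-based proxy of group false-positive rates instead of GBDT; it illustrates the proxy-Lagrangian scheme and is not the FairGBM implementation.

```python
# Toy proxy-Lagrangian sketch: gradient descent on weights, projected gradient
# ascent on the multipliers of two one-sided FPR-parity constraints.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_fair_logreg(X, y, g, epochs=500, lr=0.1, lr_lam=0.05, slack=0.02):
    n, d = X.shape
    w = np.zeros(d)
    lam = np.zeros(2)                      # one multiplier per constraint direction
    neg0 = (y == 0) & (g == 0)
    neg1 = (y == 0) & (g == 1)
    for _ in range(epochs):
        p = sigmoid(X @ w)
        # Smooth proxies for group FPRs: mean score on each group's negatives.
        fpr0, fpr1 = p[neg0].mean(), p[neg1].mean()
        c = np.array([fpr0 - fpr1 - slack, fpr1 - fpr0 - slack])
        # Analytic gradients of the proxies w.r.t. the weights.
        g0 = (p[neg0] * (1 - p[neg0])) @ X[neg0] / neg0.sum()
        g1 = (p[neg1] * (1 - p[neg1])) @ X[neg1] / neg1.sum()
        grad_loss = X.T @ (p - y) / n                     # cross-entropy gradient
        grad_c = lam[0] * (g0 - g1) + lam[1] * (g1 - g0)  # constraint contribution
        w -= lr * (grad_loss + grad_c)                    # descent on weights
        lam = np.maximum(0.0, lam + lr_lam * c)           # ascent on multipliers
    return w
```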
Monitoring the behavior of automated, real-time stream processing systems has become one of the most relevant problems in real-world applications. Such systems have grown in complexity, relying heavily on high-dimensional input data and data-hungry Machine Learning (ML) algorithms. We propose a flexible system, Feature Monitoring (FM), that detects data drift in such datasets, with a small and constant memory footprint and low computational cost in streaming applications. The method is based on multivariate statistical tests and is data-driven by design (the full reference distributions are estimated from the data). It monitors all features used by the system, while providing an interpretable ranking of the features whenever an alarm occurs (to aid root-cause analysis). The computational and memory lightness of the system results from the use of Exponential Moving Histograms. In our experimental study, we analyze the system's behavior with respect to its parameters and, more importantly, show examples where it detects problems that are not directly related to any single feature. This illustrates how FM eliminates the need to add custom signals to detect specific types of problems, and that monitoring the available feature space is often enough.
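A minimal sketch of the exponential moving histogram idea is shown below: bin counts are decayed on every update so that old observations fade out, keeping memory constant. The bin edges and decay rate here are illustrative choices, not the paper's exact configuration.

```python
# Exponential moving histogram sketch: constant memory, per-sample updates.
import numpy as np

class ExponentialMovingHistogram:
    def __init__(self, edges, alpha=0.001):
        self.edges = np.asarray(edges)        # fixed bin edges
        self.counts = np.zeros(len(edges) - 1)
        self.alpha = alpha                    # decay rate per observation

    def update(self, x):
        self.counts *= (1.0 - self.alpha)     # forget old mass
        i = np.clip(np.searchsorted(self.edges, x, side="right") - 1,
                    0, len(self.counts) - 1)
        self.counts[i] += self.alpha          # new observation gets weight alpha

    def distribution(self):
        total = self.counts.sum()
        return self.counts / total if total > 0 else self.counts
```

A reference histogram and a recent histogram can then be compared per feature with a statistical distance, and the features with the largest divergence can be ranked first when an alarm fires.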
In recent years, machine learning algorithms have become ubiquitous in a multitude of high-stakes decision-making applications. The unparalleled ability of machine learning algorithms to learn patterns from data also enables them to incorporate embedded biases. A biased model can then make decisions that disproportionately harm certain groups in society, for example by limiting their access to financial services. Awareness of this problem has given rise to the field of Fair ML, which focuses on studying, measuring, and mitigating unfairness in algorithmic predictions with respect to a set of protected groups (e.g., race or gender). However, the underlying causes of algorithmic unfairness remain elusive, and researchers are divided between blaming the ML algorithms and blaming the data they are trained on. In this work, we argue that algorithmic unfairness stems from interactions between models and biases in the data, rather than from the isolated contributions of either of them. To this end, we propose a taxonomy to characterize data bias and study a set of hypotheses regarding the fairness-accuracy trade-offs exhibited by fairness-blind ML algorithms under different data bias settings. On our real-world account-opening fraud use case, we find that each setting entails specific trade-offs, affecting fairness in expected value and in variance, with the latter often going unnoticed. Moreover, we show how algorithms compare in terms of accuracy and fairness depending on the bias affecting the data. Finally, we note that under specific data bias conditions, simple pre-processing interventions can successfully balance group error rates, while the same techniques fail in more complex settings.
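As a purely illustrative sketch (not the paper's protocol), the snippet below injects one controlled data bias, a group-dependent prevalence disparity, into synthetic data, trains a fairness-blind classifier, and measures the resulting gap in group false-positive rates at a fixed alert rate; all numbers and feature choices are arbitrary.

```python
# Illustrative bias-injection experiment: prevalence disparity between groups
# and the group FPR gap of a fairness-blind model at a 5% alert rate.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20000
group = rng.integers(0, 2, n)
y = rng.binomial(1, np.where(group == 1, 0.05, 0.01))    # prevalence disparity
X = np.column_stack([rng.normal(y, 1.0, n), rng.normal(0, 1, n), group])

scores = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
flagged = scores >= np.quantile(scores, 0.95)             # top-5% alert rate

def fpr(mask):
    neg = (y == 0) & mask
    return (flagged & neg).sum() / neg.sum()

print("FPR gap between groups:", abs(fpr(group == 0) - fpr(group == 1)))
```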
Human-AI collaboration (HAIC) in decision-making aims to create synergistic teaming between human decision-makers and AI systems. Learning to Defer (L2D) has been presented as a promising framework to determine who among humans and AI should make which decisions in order to optimize the performance and fairness of the combined system. Nevertheless, L2D entails several often unfeasible requirements, such as the availability of human predictions for every instance, or ground-truth labels that are independent of said decision-makers. Furthermore, neither L2D nor alternative approaches address fundamental issues of deploying HAIC in the real world, such as capacity management or dealing with dynamic environments. In this paper, we aim to identify and review these and other limitations, pointing to where opportunities for future HAIC research may lie.
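As a hedged illustration of the assignment problem under a capacity constraint, the sketch below defers only the k least-confident instances of a batch to humans; this is a simple confidence-based heuristic, not the learned rejector of L2D.

```python
# Capacity-aware deferral heuristic: defer the k least-confident cases.
import numpy as np

def assign(confidences, capacity):
    """Return a boolean mask: True -> defer to human, False -> AI decides."""
    confidences = np.asarray(confidences)
    k = min(capacity, len(confidences))
    defer_idx = np.argsort(confidences)[:k]   # k least-confident cases
    mask = np.zeros(len(confidences), dtype=bool)
    mask[defer_idx] = True
    return mask

# Example: 6 cases in the batch, humans can review at most 2 of them.
print(assign([0.97, 0.55, 0.80, 0.51, 0.99, 0.62], capacity=2))
```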
The unparalleled ability of machine learning algorithms to learn patterns from data also enables them to incorporate embedded biases. A biased model can then make decisions that disproportionately harm certain groups in society. Most work on measuring unfairness has focused on static ML settings rather than the dynamic, performative prediction settings in which most real-world use cases operate. In the latter, the predictive model itself plays a pivotal role in shaping the distribution of the data. However, little attention has been paid to relating unfairness to these interactions. To further the understanding of unfairness in such settings, we propose a taxonomy to characterize bias in the data and study the cases where it is shaped by model behavior. Using a real-world account-opening fraud detection case study as an example, we examine the dangers to both performance and fairness of two typical biases in performative prediction: distribution shifts and the problem of selective labels.
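The selective labels issue can be illustrated with a toy simulation: applications declined by the current model never reveal their true label, so the next model would be trained on a biased sample. The data, model, and acceptance rule below are all illustrative assumptions.

```python
# Toy selective-labels simulation: labels are only observed for accepted cases.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(50000, 5))
y = rng.binomial(1, 1 / (1 + np.exp(-(2 * X[:, 0] + X[:, 1] - 4))))

model = LogisticRegression().fit(X[:10000], y[:10000])    # initial model
scores = model.predict_proba(X[10000:])[:, 1]
accepted = scores < np.quantile(scores, 0.95)             # decline the riskiest 5%

# Only accepted applications yield labels -> the next training set is biased.
X_next, y_next = X[10000:][accepted], y[10000:][accepted]
print("observed fraud rate:", y_next.mean(), "| true fraud rate:", y[10000:].mean())
```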
Teaser: How seemingly trivial experiment design choices, made to simplify the evaluation of human-ML systems, can yield misleading results.
Money laundering is a global problem concerned with the proceeds (1.7-4 trillion euros annually) of serious felonies such as drug dealing, human trafficking, or corruption. The anti-money laundering systems deployed by financial institutions typically comprise rules aligned with regulatory frameworks, with human investigators reviewing the alerts and reporting suspicious cases. Such systems suffer from high false-positive rates, which undermines their effectiveness and leads to high operational costs. We propose a machine learning triage model that complements the rule-based system and learns to predict the risk of an alert accurately. Our model uses both entity-centric engineered features and attributes that characterize inter-entity relations in the form of graph-based features. We leverage time windows to construct the dynamic graph, optimizing for time and space efficiency. We validate our model on a real-world banking dataset and show how the triage model can reduce the number of false positives by 80% while detecting 90% of the true positives. In this way, our model can significantly improve anti-money laundering operations.
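As a hedged sketch of the triage evaluation described above, the snippet below picks the score threshold that keeps roughly 90% of true positives among alerts and then measures how many false positives can be dismissed; the scores and labels are synthetic stand-ins, not the banking data.

```python
# Triage-threshold sketch: fix the true-positive recall, measure FP reduction.
import numpy as np

rng = np.random.default_rng(0)
labels = rng.binomial(1, 0.05, 100000)                          # ~5% of alerts are true positives
scores = np.clip(rng.normal(0.3 + 0.4 * labels, 0.15), 0, 1)    # synthetic triage scores

threshold = np.quantile(scores[labels == 1], 0.10)              # keep ~90% of true positives
kept = scores >= threshold
fp_before = (labels == 0).sum()
fp_after = ((labels == 0) & kept).sum()
print("TPR among kept alerts:", kept[labels == 1].mean())
print("Fraction of false positives dismissed:", 1 - fp_after / fp_before)
```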
In recent years, the number of deployed IoT devices has exploded, reaching the scale of billions. However, this growth has been accompanied by new cybersecurity issues, such as the deployment of unauthorized devices, malicious code modification, malware deployment, or vulnerability exploitation. These issues have motivated the need for new device identification mechanisms based on behavior monitoring. Moreover, such solutions have recently leveraged Machine and Deep Learning (ML/DL) techniques, thanks to advances in the field and the increase in processing capabilities. Meanwhile, attackers do not stand still: they have developed adversarial attacks, focused on context modification and ML/DL evasion, against IoT device identification solutions. This work explores the performance of hardware-behavior-based individual device identification, how it is affected by possible context- and ML/DL-focused attacks, and how its resilience can be improved using defense techniques. It proposes an LSTM-CNN architecture based on hardware performance behavior for individual device identification. The proposed architecture is then compared with previous techniques on a hardware performance dataset collected from 45 Raspberry Pi devices running identical software. The LSTM-CNN improves on previous solutions, achieving an average F1-Score above 0.96 and a minimum TPR of 0.8 across all devices. Afterward, context- and ML/DL-focused adversarial attacks were applied against the model to test its robustness. A temperature-based context attack was not able to disrupt the identification, but several state-of-the-art ML/DL evasion attacks were successful. Finally, adversarial training and model distillation defense techniques were selected to improve the model's resilience to evasion attacks without degrading its performance.
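A minimal PyTorch sketch of an LSTM-CNN classifier of this kind is shown below: an LSTM summarizes the hardware-performance time series, 1D convolutions extract local patterns, and a linear head predicts the device identity. The layer sizes, sequence length, and feature count are assumptions, not the architecture reported in the work.

```python
# Minimal LSTM-CNN sketch for device identification from performance traces.
import torch
import torch.nn as nn

class LSTMCNN(nn.Module):
    def __init__(self, n_features=10, n_devices=45):
        super().__init__()
        self.lstm = nn.LSTM(n_features, 64, batch_first=True)
        self.conv = nn.Sequential(
            nn.Conv1d(64, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),
        )
        self.head = nn.Linear(32, n_devices)

    def forward(self, x):                              # x: (batch, time, features)
        h, _ = self.lstm(x)                            # (batch, time, 64)
        z = self.conv(h.transpose(1, 2)).squeeze(-1)   # (batch, 32)
        return self.head(z)                            # per-device logits

logits = LSTMCNN()(torch.randn(8, 100, 10))  # 8 traces of 100 time steps, 10 counters
print(logits.shape)                          # torch.Size([8, 45])
```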
Cybercriminals are moving towards zero-day attacks affecting resource-constrained devices such as single-board computers (SBC). Assuming that perfect security is unrealistic, Moving Target Defense (MTD) is a promising approach to mitigate attacks by dynamically altering target attack surfaces. Still, selecting suitable MTD techniques against zero-day attacks is an open challenge. Reinforcement Learning (RL) could be an effective approach to optimize MTD selection through trial and error, but the literature falls short when it comes to i) evaluating the performance of RL and MTD solutions in real-world scenarios, ii) studying whether behavioral fingerprinting is suitable for representing SBCs' states, and iii) quantifying resource consumption on SBCs. To address these limitations, this work proposes an online RL-based framework that learns which MTD mechanisms mitigate heterogeneous zero-day attacks on SBCs. The framework uses behavioral fingerprinting to represent the SBC's state and RL to learn the MTD techniques that mitigate each malicious state. It has been deployed in a real IoT crowdsensing scenario with a Raspberry Pi acting as a spectrum sensor. More specifically, the Raspberry Pi was infected with different samples of command-and-control malware, rootkits, and ransomware, and the framework then selected among four existing MTD techniques. A set of experiments demonstrated the suitability of the framework for learning proper MTD techniques that mitigate all attacks (except a harmful rootkit) while consuming <1 MB of storage and utilizing <55% CPU and <80% RAM.
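A hedged sketch of the RL component is shown below: tabular Q-learning over discretized fingerprint states, where each action is one of four MTD techniques. The environment dynamics and reward are placeholder assumptions, since the real framework observes new fingerprints and mitigation outcomes online on the device itself.

```python
# Tabular Q-learning sketch: states are discretized fingerprints, actions are
# MTD techniques, the reward signals whether the malicious state was mitigated.
import numpy as np

n_states, n_actions = 5, 4          # e.g., benign + 4 malicious states; 4 MTD techniques
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.2
rng = np.random.default_rng(0)

def step(state, action):
    """Placeholder environment: in the real framework the next state comes
    from a fresh fingerprint and the reward reflects mitigation success."""
    mitigated = action == state % n_actions            # toy dynamics
    return (0 if mitigated else state), (1.0 if mitigated else -1.0)

state = 3                                              # start in a malicious state
for _ in range(2000):
    action = rng.integers(n_actions) if rng.random() < eps else int(Q[state].argmax())
    next_state, reward = step(state, action)
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state if next_state != 0 else rng.integers(1, n_states)
print(Q.round(2))
```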